Cross-scale Multi-instance Learning for Pathological Image Diagnosis
Deng, Ruining, Cui, Can, Remedios, Lucas W., Bao, Shunxing, Womick, R. Michael, Chiron, Sophie, Li, Jia, Roland, Joseph T., Lau, Ken S., Liu, Qi, Wilson, Keith T., Wang, Yaohong, Coburn, Lori A., Landman, Bennett A., Huo, Yuankai
Analyzing high resolution whole slide images (WSIs) with regard to information across multiple scales poses a significant challenge in digital pathology. Multi-instance learning (MIL) is a common solution for working with high resolution images by classifying bags of objects (i.e., sets of smaller image patches). However, such processing is typically performed at a single scale (e.g., 20x magnification) of WSIs, disregarding the vital inter-scale information that is key to diagnoses by human pathologists. In this study, we propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis. The contribution of this paper is three-fold: (1) a novel cross-scale MIL (CS-MIL) algorithm that integrates multi-scale information and inter-scale relationships is proposed; (2) a toy dataset with scale-specific morphological features is created and released to examine and visualize differential cross-scale attention; (3) superior performance on both in-house and public datasets is demonstrated by our simple cross-scale MIL strategy. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.
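The cross-scale aggregation the abstract describes can be sketched as an attention-weighted fusion of patch features across magnifications. The shapes, the random features, and the projection vector `W` below are illustrative assumptions for a minimal sketch, not the paper's actual CS-MIL architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_patches, n_scales, d = 8, 3, 16  # e.g. co-located patches at 20x, 10x, 5x

# Hypothetical patch embeddings: one d-dim feature per patch per scale.
feats = rng.normal(size=(n_patches, n_scales, d))
# Hypothetical learned attention projection (would be trained end-to-end).
W = rng.normal(size=(d,))

scores = feats @ W                               # (n_patches, n_scales)
alpha = softmax(scores, axis=1)                  # cross-scale attention weights
fused = (alpha[..., None] * feats).sum(axis=1)   # (n_patches, d) fused features
```

In a trained network, `alpha` is what can be visualized per patch to show which magnification the model attended to, which is the "differential cross-scale attention" the toy dataset is designed to probe.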
Cross-scale Attention Guided Multi-instance Learning for Crohn's Disease Diagnosis with Pathological Images
Deng, Ruining, Cui, Can, Remedios, Lucas W., Bao, Shunxing, Womick, R. Michael, Chiron, Sophie, Li, Jia, Roland, Joseph T., Lau, Ken S., Liu, Qi, Wilson, Keith T., Wang, Yaohong, Coburn, Lori A., Landman, Bennett A., Huo, Yuankai
Multi-instance learning (MIL) is widely used in the computer-aided interpretation of pathological Whole Slide Images (WSIs) to address the lack of pixel-wise or patch-wise annotations. Often, this approach directly applies "natural image driven" MIL algorithms that overlook the multi-scale (i.e., pyramidal) nature of WSIs. Off-the-shelf MIL algorithms are typically deployed on a single scale of WSIs (e.g., 20x magnification), while human pathologists usually aggregate global and local patterns in a multi-scale manner (e.g., by zooming in and out between different magnifications). In this study, we propose a novel cross-scale attention mechanism to explicitly aggregate inter-scale interactions into a single MIL network for Crohn's Disease (CD), a form of inflammatory bowel disease. The contribution of this paper is two-fold: (1) a cross-scale attention mechanism is proposed to aggregate features from different resolutions with multi-scale interaction; and (2) differential multi-scale attention visualizations are generated to localize explainable lesion patterns. By training on ~250,000 H&E-stained Ascending Colon (AC) patches from 20 CD patient and 30 healthy control samples at different scales, our approach achieved a superior Area under the Curve (AUC) score of 0.8924 compared with baseline models. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.
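The attention-guided MIL pooling that the abstract builds on — aggregating a bag of patch embeddings into one slide-level representation via learned instance weights — can be sketched as below. The tanh scoring, dimensions, and parameter matrices are illustrative assumptions in the style of standard attention-based MIL pooling, not the authors' exact network:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
n_instances, d, h = 12, 16, 8   # patches per bag, embedding dim, attention dim

# Hypothetical instance (patch) embeddings forming one bag (one WSI).
bag = rng.normal(size=(n_instances, d))
# Hypothetical attention parameters (would be learned from slide labels).
V = rng.normal(size=(d, h)) * 0.1
w = rng.normal(size=(h,)) * 0.1

scores = np.tanh(bag @ V) @ w   # (n_instances,) unnormalized attention scores
alpha = softmax(scores)         # instance attention weights, sum to 1
z = alpha @ bag                 # (d,) bag-level embedding for classification
```

Only the slide-level label supervises training here, which is why MIL sidesteps the need for patch-wise annotations; the per-instance weights `alpha` double as a heatmap for localizing explainable lesion patterns.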